Deep learning-based denoising streamed from mobile phones improves speech-in-noise understanding for hearing aid users
The hearing loss of almost half a billion people is commonly treated with
hearing aids. However, current hearing aids often do not work well in
real-world noisy environments. We present a deep learning-based denoising
system that runs in real time on an iPhone 7 and a Samsung Galaxy S10 (25 ms
algorithmic latency). The denoised audio is streamed to the hearing aid,
resulting in a total delay of around 75 ms. In tests with hearing aid users
with moderate to severe hearing loss, our denoising system improves audio
across three tests: 1) a listening test with subjective audio ratings, 2) a
listening test of objective speech intelligibility, and 3) live conversations
in a noisy environment with subjective ratings. Subjective ratings increase by
more than 40% for both the listening test and the live conversation, compared
with a fitted hearing aid as a baseline. Speech reception thresholds (SRTs),
which measure speech understanding in noise, improve by 1.6 dB. Ours is the
first denoising system implemented on a mobile device and streamed directly to
users' hearing aids that uses only a single audio input channel while improving
user satisfaction on all tested aspects, including speech intelligibility.
Users even preferred the denoised, streamed signal over the hearing aid alone,
accepting the higher latency in exchange for the significant improvement in
speech understanding.
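The 25 ms algorithmic latency follows from block-based processing: the denoiser must buffer one full frame of audio before it can produce output. The abstract does not state the sample rate or frame structure, so the following is only a minimal sketch assuming a 16 kHz sample rate and non-overlapping 25 ms frames, with an identity placeholder standing in for the actual deep-learning model:

```python
# Sketch of frame-based real-time processing (assumptions: 16 kHz audio,
# non-overlapping 25 ms frames; the real system's parameters may differ).

SAMPLE_RATE = 16_000                          # Hz (assumed)
FRAME_MS = 25                                 # algorithmic latency from the paper
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000    # 400 samples per frame

def denoise_frame(frame):
    """Placeholder for the deep-learning denoiser (identity here)."""
    return list(frame)

def stream_denoise(samples):
    """Process audio frame by frame, as a real-time pipeline would."""
    out = []
    for start in range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN):
        out.extend(denoise_frame(samples[start:start + FRAME_LEN]))
    return out

audio = [0.0] * (FRAME_LEN * 4)               # 100 ms of silence
print(FRAME_LEN)                              # → 400
print(len(stream_denoise(audio)))             # → 1600
```

The remaining ~50 ms of the reported 75 ms total delay would then come from capture, model compute, and wireless streaming to the hearing aid, not from the algorithm's buffering itself.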
True self-configuration for the IoT
For the Internet of Things to finally become a reality, obstacles on several levels need to be overcome. This is especially true for the upcoming challenge of leaving the domain of technical experts and scientists. Devices need to connect to the Internet and offer services. They have to announce and describe these services in machine-understandable ways so that user-facing systems can find and use them. They have to learn about their physical surroundings so that they can serve sensing or actuation purposes without explicit configuration or programming. Finally, it must be possible to include IoT devices in complex systems that combine local and remote data from different sources in novel and surprising ways. We show how all of this is possible today. Our solution uses open standards and state-of-the-art protocols: it is based on 6LoWPAN and CoAP for communication, Semantic Web technologies for meaningful data exchange, autonomous sensor correlation to learn about the environment, and software built around the Linked Data principles to remain open to novel and unforeseen applications.
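A machine-understandable service announcement of the kind described above can be written as Linked Data, e.g. in JSON-LD. The sketch below is purely illustrative: the vocabulary URIs, field names, and CoAP address are assumptions for demonstration, not the paper's actual schema:

```python
import json

# Hypothetical JSON-LD self-description a device might publish so that
# user-facing systems can discover and interpret its sensing service.
# The W3C SSN vocabulary and the endpoint address are illustrative choices.
description = {
    "@context": {
        "ssn": "http://www.w3.org/ns/ssn/",
        "name": "http://schema.org/name",
    },
    "@id": "coap://[2001:db8::1]/sensors/temp",   # example CoAP endpoint
    "@type": "ssn:Sensor",
    "name": "Room temperature sensor",
    "ssn:observes": "http://example.org/property/temperature",
}

payload = json.dumps(description, indent=2)       # what the device would serve
doc = json.loads(payload)                         # what a client would parse
print(doc["@type"])                               # → ssn:Sensor
```

Because the description links out to shared vocabularies rather than using private field names, a client that has never seen this particular device can still recognize it as a sensor observing temperature, which is the self-configuration property the abstract argues for.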